5 research outputs found

    PyTomography: A Python Library for Quantitative Medical Image Reconstruction

    Background: There is a scarcity of open-source libraries in medical imaging dedicated to both (i) the development and deployment of novel reconstruction algorithms and (ii) support for clinical data. Purpose: To create and evaluate a GPU-accelerated, open-source, and user-friendly image reconstruction library designed to serve as a central platform for the development, validation, and deployment of novel tomographic reconstruction algorithms. Methods: PyTomography was developed in Python and inherits the GPU-accelerated functionality of PyTorch for fast computation. The software uses a modular design that decouples the system matrix from the reconstruction algorithms, simplifying the integration of new imaging modalities and the development of novel reconstruction techniques. As example developments, SPECT reconstruction in PyTomography is validated against both vendor-specific software and alternative open-source libraries, and Bayesian reconstruction algorithms are implemented and validated. Results: PyTomography is consistent with both vendor software and alternative open-source libraries for standard clinical SPECT reconstruction, while providing significant computational advantages. As example applications, Bayesian reconstruction algorithms incorporating anatomical information are shown to outperform the traditional ordered subset expectation maximization (OSEM) algorithm in quantitative image analysis, and PSF modeling in PET imaging is shown to reduce blurring artifacts. Conclusions: We have developed and publicly shared PyTomography, a highly optimized and user-friendly software package for quantitative reconstruction of medical images, with a class hierarchy that fosters the development of novel imaging applications. Comment: 26 pages, 7 figures
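The OSEM algorithm mentioned above updates the image estimate one projection subset at a time, multiplying the current estimate by the back-projected ratio of measured to estimated counts. As a rough illustration (not PyTomography's actual API), a minimal plain-NumPy sketch of the OSEM update, with a hypothetical toy system matrix `A`, might look like:

```python
import numpy as np

def osem(y, A, n_iters=4, n_subsets=2, eps=1e-12):
    """Minimal OSEM sketch: y = measured projections, A = system matrix.
    Subsets are formed by striding over the projection rows."""
    x = np.ones(A.shape[1])                      # uniform initial estimate
    for _ in range(n_iters):
        for s in range(n_subsets):
            rows = np.arange(s, A.shape[0], n_subsets)
            As, ys = A[rows], y[rows]
            sens = As.sum(axis=0)                # subset sensitivity A_s^T 1
            ratio = ys / np.maximum(As @ x, eps) # measured / estimated counts
            x = x / np.maximum(sens, eps) * (As.T @ ratio)
    return x

# Toy example: recover a 3-pixel image from a 4-row system matrix
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 3))
x_true = np.array([1.0, 2.0, 0.5])
y = A @ x_true                                   # noiseless projections
x_hat = osem(y, A, n_iters=50, n_subsets=2)
```

With noiseless, consistent data the iterates approach a solution whose forward projection matches `y`; in practice the update runs on GPU tensors rather than dense NumPy matrices.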

    Left Ventricular Myocardial Dysfunction Evaluation in Thalassemia Patients Using Echocardiographic Radiomic Features and Machine Learning Algorithms.

    Heart failure caused by iron deposits in the myocardium is the primary cause of mortality in beta-thalassemia major patients. Cardiac magnetic resonance imaging (CMRI) T2* is the primary screening technique used to detect myocardial iron overload, but it has inherent limitations. In this study, we aimed to differentiate beta-thalassemia major patients with myocardial iron overload from those without (as detected by T2* CMRI) based on radiomic features extracted from echocardiography images and machine learning (ML), in patients with a normal left ventricular ejection fraction (LVEF > 55%) on echocardiography. Out of 91 cases, 44 thalassemia major patients with normal LVEF (> 55%) and T2* ≤ 20 ms, and 47 subjects with LVEF > 55% and T2* > 20 ms serving as the control group, were included in the study. Radiomic features were extracted from each end-systolic (ES) and end-diastolic (ED) image. Then, three feature selection (FS) methods and six different classifiers were used. The models were evaluated using various metrics, including the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). Maximum relevance-minimum redundancy-eXtreme gradient boosting (MRMR-XGB) (AUC = 0.73, ACC = 0.73, SPE = 0.73, SEN = 0.73), ANOVA-MLP (AUC = 0.69, ACC = 0.69, SPE = 0.56, SEN = 0.83), and recursive feature elimination-K-nearest neighbors (RFE-KNN) (AUC = 0.65, ACC = 0.65, SPE = 0.64, SEN = 0.65) were the best models on the ED, ES, and ED&ES datasets, respectively. Using radiomic features extracted from echocardiographic images and ML, it is feasible to predict cardiac problems caused by iron overload.
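The pipeline described above (feature selection followed by a classifier, evaluated by AUC and related metrics) can be sketched with scikit-learn. This is an illustrative stand-in, not the study's code: the surrogate data, the ANOVA-style selector, and the use of `GradientBoostingClassifier` in place of XGBoost are all assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Surrogate data standing in for radiomic features from ED/ES frames
X, y = make_classification(n_samples=91, n_features=50,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Feature selection + classifier in one pipeline, so the selector
# is fit only on the training fold
model = Pipeline([
    ("fs", SelectKBest(f_classif, k=10)),        # ANOVA-style FS
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Wrapping the selector in the `Pipeline` prevents information from the test set leaking into feature selection, which mirrors the train-only FS described in the abstract.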

    Machine learning based readmission and mortality prediction in heart failure patients

    Abstract This study aims to predict in-hospital and 6-month mortality, as well as 30-day and 90-day hospital readmission, using a machine learning (ML) approach based on conventional features. A total of 737 patients remained after applying the exclusion criteria to 1101 heart failure patients. Thirty-four conventional features were collected for each patient. First, the data were divided into train and test cohorts with a 70–30% ratio. Then the train data were normalized using the Z-score method, and its mean and standard deviation were applied to the test data. Subsequently, Boruta, RFE, and MRMR feature selection methods were utilized to select the more important features in the training set. In the next step, eight ML approaches were used for modeling. Next, hyperparameters were optimized using tenfold cross-validation and grid search on the train dataset. All model development steps (normalization, feature selection, and hyperparameter optimization) were performed on the train set without touching the hold-out test set. Then, bootstrapping was performed 1000 times on the hold-out test data. Finally, the obtained results were evaluated using four metrics: area under the ROC curve (AUC), accuracy (ACC), specificity (SPE), and sensitivity (SEN). The RFE-LR (AUC: 0.91, ACC: 0.84, SPE: 0.84, SEN: 0.83) and Boruta-LR (AUC: 0.90, ACC: 0.85, SPE: 0.85, SEN: 0.83) models generated the best results for in-hospital mortality. For 30-day rehospitalization, the Boruta-SVM (AUC: 0.73, ACC: 0.81, SPE: 0.85, SEN: 0.50) and MRMR-LR (AUC: 0.71, ACC: 0.68, SPE: 0.69, SEN: 0.63) models performed best. The best model for 90-day rehospitalization was MRMR-KNN (AUC: 0.60, ACC: 0.63, SPE: 0.66, SEN: 0.53), and regarding 6-month mortality, the MRMR-LR (AUC: 0.61, ACC: 0.63, SPE: 0.44, SEN: 0.66) and MRMR-NB (AUC: 0.59, ACC: 0.61, SPE: 0.48, SEN: 0.63) models outperformed the others.
Reliable models were developed for 30-day rehospitalization and in-hospital mortality using conventional features and ML techniques. Such models can effectively support personalized treatment, decision-making, and more efficient budget allocation. The results obtained for the 90-day rehospitalization and 6-month mortality endpoints were less satisfactory, and further experiments with additional information are needed to achieve promising results for these endpoints.
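Two of the steps described above are easy to get wrong and worth making concrete: the test set must be standardized with the train set's mean and standard deviation (never its own), and the hold-out metric is bootstrapped to obtain a confidence interval. A minimal sketch with hypothetical toy data, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy train/test split of "conventional features"
X_train = rng.normal(5.0, 2.0, size=(70, 3))
X_test = rng.normal(5.0, 2.0, size=(30, 3))

# Z-score using TRAIN statistics only, then apply them to the test set
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_z = (X_train - mu) / sigma
X_test_z = (X_test - mu) / sigma

# 1000-fold bootstrap of a test-set metric (here a stand-in per-patient
# score); the percentiles give a 95% confidence interval
scores = rng.uniform(0, 1, size=30)
boot_means = np.array([
    scores[rng.integers(0, len(scores), len(scores))].mean()
    for _ in range(1000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```

Reusing `mu` and `sigma` on the test set is what keeps the hold-out data untouched during model development, as the abstract emphasizes.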

    Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance

    Purpose: Whole-body bone scintigraphy (WBS) is one of the most widely used modalities for diagnosing malignant bone diseases in their early stages. However, the procedure is time-consuming and requires diligence and experience. Moreover, interpretation of WBS scans in the early stages of these disorders can be challenging because the patterns often resemble a normal appearance and are thus prone to subjective interpretation. To simplify the gruelling, subjective, and error-prone task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with human observers. Materials and Methods: After applying our exclusion criteria to 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into training and testing parts, with a fraction of the training data set aside for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, namely squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA) aggregation, were used to combine the features of the dual-view input models. Model performance was reported through the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity, and was compared using the DeLong test applied to the ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers.
Results: DenseNet121_AA (DenseNet121 with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, Inception V3 and InceptionResNetV2 CNN models and dual-view input with AA aggregation performed best in the first analysis, while DenseNet121 and InceptionResNetV2 CNN models and dual-view input with AA aggregation achieved the best results in the second analysis. The performance of the AI models was significantly higher than that of the human observers in the first analysis, whereas their performance was comparable in the second analysis, although the AI models assessed the scans in drastically less time. Conclusion: Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.
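The squeeze-and-excitation (SE) aggregation mentioned above reweights feature channels with gates computed from globally pooled statistics. A minimal NumPy sketch of a single SE block follows; it illustrates the general SE technique, not the paper's architecture, and the weights `w1`/`w2` are random stand-ins for learned parameters:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excitation reweighting of a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the block's two FC layers."""
    z = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)              # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # FC + sigmoid -> per-channel gates in (0, 1)
    return x * s[:, None, None]              # rescale each channel by its gate

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels and reduction ratio
x = rng.normal(size=(C, 4, 4))               # toy feature map
w1 = rng.normal(size=(C // r, C)) * 0.1
w2 = rng.normal(size=(C, C // r)) * 0.1
out = squeeze_excite(x, w1, w2)
```

For dual-view aggregation, the pooled statistics would be computed over features from both the anterior and posterior branches before gating, letting one view modulate the other's channels.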

    Myocardial Perfusion SPECT Imaging Radiomic Features and Machine Learning Algorithms for Cardiac Contractile Pattern Recognition

    A U-shaped contraction pattern was shown to be associated with a better cardiac resynchronization therapy (CRT) response. The main goal of this study is to automatically recognize left ventricular contractile patterns using machine learning algorithms trained on conventional quantitative features (ConQuaFea) and radiomic features extracted from gated single-photon emission computed tomography myocardial perfusion imaging (GSPECT MPI). Among the 98 patients with standard resting GSPECT MPI included in this study, 29 received CRT and 69 did not (these patients also met the CRT inclusion criteria but had not yet received treatment at the time of data collection, or had refused treatment). The 69 non-CRT patients were employed for training, and the 29 CRT patients for testing. The models were built utilizing features from three distinct feature sets (ConQuaFea, radiomics, and ConQuaFea + radiomics (combined)), which were chosen using recursive feature elimination (RFE) feature selection (FS), and then trained using seven different machine learning (ML) classifiers. In addition, CRT outcome prediction was assessed by different treatment inclusion criteria as the study’s final phase. The MLP classifier had the highest performance among the ConQuaFea models (AUC, SEN, SPE = 0.80, 0.85, 0.76). RF achieved the best performance among the radiomic models, with AUC, SEN, and SPE values of 0.65, 0.62, and 0.68, respectively. GB and RF achieved the best AUC, SEN, and SPE values of 0.78, 0.92, and 0.63 and 0.74, 0.93, and 0.56, respectively, among the combined models. A promising outcome was obtained when using radiomic features and ConQuaFea from GSPECT MPI to detect left ventricular contractile patterns by machine learning.
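Recursive feature elimination, as used above, repeatedly fits a model and discards the lowest-ranked features until the target count remains. A minimal scikit-learn sketch on surrogate data follows; the sample count echoes the study, but the data, estimator, and feature count are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Surrogate features standing in for ConQuaFea + radiomics from GSPECT MPI
X, y = make_classification(n_samples=98, n_features=30,
                           n_informative=5, random_state=0)

# RFE: fit the estimator, drop the weakest feature(s), refit, repeat
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)     # indices of surviving features
```

The surviving column indices in `kept` would then index into the named radiomic/ConQuaFea feature list before training the downstream classifiers.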